Introduction

Diagnostic and prognostic models are typically evaluated with measures of accuracy that do not address clinical consequences. Decision-analytic techniques allow assessment of clinical outcomes, but often require collection of additional information and may be cumbersome to apply to models that yield a continuous result. Decision curve analysis is a method for evaluating and comparing prediction models that incorporates clinical consequences, requires only the data set on which the models are tested, and can be applied to models with either continuous or dichotomous results.

This document will walk you through how to perform a decision curve analysis (DCA) in many settings and how to interpret the resulting curves. In DCA, prediction models are compared to two default strategies: 1) assume that all patients are test positive and therefore treat everyone, or 2) assume that all patients are test negative and treat no one. “Treatment” is considered in the widest possible sense: not only drugs, radiotherapy, or surgery, but also advice, further diagnostic procedures, or more intensive monitoring. For more details on DCA, visit www.decisioncurveanalysis.org, where you will find the original articles deriving the method along with related papers.

Below we walk through how to perform DCA for binary and time-to-event outcomes using R, Stata, SAS, and Python. Code is provided for all languages; the results shown are the output of the R DCA functions.


Install & Load

Use the scripts below to install the DCA functions and/or load them for use.

R
# install dcurves, tidyverse, gtsummary, and broom from CRAN
install.packages(c("dcurves", "tidyverse", "gtsummary", "broom", "survival", "rsample"))

# load package
library(dcurves)
library(tidyverse)
library(gtsummary)


DCA for Binary Outcomes

Motivating Example

We will be working with an example data set containing information about cancer diagnoses. The data set includes 750 patients who have recently discovered they carry a gene mutation that puts them at higher risk for harboring cancer. Each patient has been biopsied, so we know their cancer status. It is known that older patients with a family history of cancer have a higher probability of harboring cancer. A clinical chemist has recently discovered a marker that she believes can distinguish between patients with and without cancer. We wish to assess whether the new marker does indeed distinguish between these patients. If the marker predicts well, many patients will not need to undergo a painful biopsy.

Data Set-up

We will go step by step through how to import your data, build models based on multiple variables, and use those models to obtain predicted probabilities. The first step is to import your data. Once the data are imported, you can begin building your model. As we have a binary outcome (i.e. the outcome of our model has two levels: cancer or no cancer), we will use a logistic regression model.

R
# import data
df_cancer_dx <-
  readr::read_csv(
    file = "https://raw.githubusercontent.com/ddsjoberg/dca-tutorial/main/data/df_cancer_dx.csv"
  )

# summarize data
df_cancer_dx %>%
  select(-patientid) %>%
  tbl_summary(type = all_dichotomous() ~ "categorical")

Characteristic        N = 750¹
cancer
  0                   645 (86%)
  1                   105 (14%)
dead
  0                   719 (96%)
  1                   31 (4.1%)
risk_group
  high                21 (2.8%)
  intermediate        335 (45%)
  low                 394 (53%)
age                   65.1 (61.7, 68.3)
famhistory
  0                   635 (85%)
  1                   115 (15%)
marker                0.64 (0.29, 1.38)
cancerpredmarker      0.06 (0.02, 0.18)

¹ n (%); Median (IQR)


Univariate Decision Curve Analysis

First, we want to confirm family history of cancer is indeed associated with the biopsy result.

R
# build logistic regression model
mod1 <- glm(cancer ~ famhistory, data = df_cancer_dx, family = binomial)

# model summary
mod1_summary <- tbl_regression(mod1, exponentiate = TRUE)
mod1_summary

Characteristic   OR¹    95% CI¹      p-value
famhistory       1.80   1.07, 2.96   0.022

¹ OR = Odds Ratio, CI = Confidence Interval


Via logistic regression with cancer as the outcome, we can see that family history is related to biopsy outcome with odds ratio 1.80 (95% CI 1.07, 2.96; p=0.022). The DCA can help us address the clinical utility of using family history to predict biopsy outcome.

R
dca(cancer ~ famhistory, data = df_cancer_dx) %>%
  plot()


First, note that there are many threshold probabilities shown here that are not of interest. For example, it is unlikely that a patient would demand at least a 50% risk of cancer before accepting a biopsy. Let’s do the DCA again, this time restricting the output to threshold probabilities within a more clinically reasonable range, between 0% and 35%.

R
dca(cancer ~ famhistory,
    data = df_cancer_dx,
    thresholds = seq(0, 0.35, 0.01)) %>%
  plot()


Now that the graph shows a more reasonable range of threshold probabilities, let’s assess the clinical utility of family history alone. We can see that although family history is significantly associated with biopsy outcome, it only adds value over a small range of threshold probabilities, roughly 13% to 20%. If your personal threshold probability is 15% (i.e. you would undergo a biopsy if your probability of cancer were greater than 15%), then family history alone can be helpful in making the decision to undergo biopsy. However, if your threshold probability is less than 13% or greater than 20%, family history adds no more benefit than a biopsy-all or biopsy-none strategy.
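For reference, the net benefit that these curves plot is simple arithmetic. A minimal sketch in Python (the counts below are illustrative, not taken from the tutorial data set):

```python
def net_benefit(tp, fp, n, pt):
    # net benefit of a treat-if-positive strategy at threshold probability pt:
    # true positives per patient, minus false positives per patient
    # weighted by the odds at the threshold
    return tp / n - fp / n * pt / (1 - pt)

# illustrative: a strategy finding 30 cancers at the cost of 150 unnecessary
# biopsies among 750 patients, evaluated at a 15% threshold
print(round(net_benefit(30, 150, 750, 0.15), 4))  # 0.0047

# "biopsy none" produces no true or false positives, so its net benefit is 0
print(net_benefit(0, 0, 750, 0.2))  # 0.0
```

This is the quantity plotted on the y axis of every decision curve; the software computes it for each strategy across the grid of thresholds.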

Multivariable Decision Curve Analysis

Evaluation of New Models

We want to examine the value of a statistical model that incorporates family history, age, and the marker. First, we build a logistic regression model with all three variables; second, we save out the predicted probability of having cancer based on the model. Note that in our example data set this variable (cancerpredmarker) already exists, so it would not be necessary to create the predicted probabilities again.

R
# build multivariable logistic regression model
mod2 <- glm(cancer ~ marker + age + famhistory, data = df_cancer_dx, family = binomial)

# summarize model
tbl_regression(mod2, exponentiate = TRUE)

# add predicted values from model to data set
df_cancer_dx <-
  df_cancer_dx %>%
  mutate(
    cancerpredmarker =
      broom::augment(mod2, type.predict = "response") %>%
      pull(".fitted")
  )

Characteristic   OR¹    95% CI¹      p-value
marker           2.66   2.12, 3.38   <0.001
age              1.33   1.25, 1.41   <0.001
famhistory       2.38   1.29, 4.32   0.005

¹ OR = Odds Ratio, CI = Confidence Interval


We now want to compare our different approaches to cancer detection: biopsying everyone, biopsying no-one, biopsying on the basis of family history, or biopsying on the basis of a multivariable statistical model including the marker, age and family history of cancer.

R
dca(cancer ~ famhistory + cancerpredmarker,
    data = df_cancer_dx,
    thresholds = seq(0, 0.35, 0.01)) %>%
  plot(smooth = TRUE)


The key aspect of decision curve analysis is to look at which strategy leads to the largest net benefit (i.e. the “highest” line), which in this example corresponds to the model that includes age, family history of cancer, and the marker. Across the range of reasonable threshold probabilities, one cannot go wrong basing decisions on this multivariable model: it is never worse than the alternative strategies and usually better.

A few points are worth noting. First, look at the green line, the net benefit for “treat all”, that is, biopsy everyone. This crosses the y axis at the prevalence. Imagine that a man had a risk threshold of 14% and asked for his risk under the “biopsy all” strategy: he would be told that his risk was the prevalence (14%). When a patient’s risk threshold equals his predicted risk, the net benefits of biopsying and not biopsying are the same.

Second, the decision curve for the binary variable (family history of cancer, the brown line) crosses the “biopsy all” line at 1 minus the negative predictive value. Again, this is easily explained: the negative predictive value is 87%, so a patient with no family history of cancer has a probability of disease of 13%. A patient with a threshold probability below this (for example, one who would opt for biopsy even at a 10% risk) should therefore be biopsied even with no family history of cancer. Similarly, the decision curve for a binary variable crosses the “biopsy none” line at the positive predictive value. This is because, for a binary variable, a patient with the characteristic is given a risk equal to the positive predictive value.
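Both facts about the “treat all” line can be checked directly from the net benefit formula. A quick numeric check in Python (the 14% prevalence is from the data set above; the helper function is ours):

```python
def nb_treat_all(prevalence, pt):
    # net benefit of biopsying everyone at threshold probability pt:
    # every case is a true positive, every non-case a false positive
    return prevalence - (1 - prevalence) * pt / (1 - pt)

# at a threshold of 0, the treat-all curve starts at the prevalence
print(nb_treat_all(0.14, 0.0))  # 0.14

# at a threshold equal to the prevalence, treat-all has net benefit
# (approximately) 0, the same as treat-none
print(abs(nb_treat_all(0.14, 0.14)) < 1e-12)  # True
```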

Evaluation of Published Models

Imagine that a model was published by Brown et al. with respect to our cancer biopsy data set. The authors reported a statistical model with coefficients of 0.75 for a positive family history of cancer, 0.26 for each additional year of age, and an intercept of -17.5. To test this formula on our data set:

R
# Use the coefficients from the Brown model
df_cancer_dx <-
  df_cancer_dx %>%
  mutate(
    logodds_brown = 0.75 * famhistory + 0.26 * age - 17.5,
    phat_brown = exp(logodds_brown) / (1 + exp(logodds_brown))
  )

# Run the decision curve
dca(cancer ~ phat_brown,
    data = df_cancer_dx,
    thresholds = seq(0, 0.35, 0.01)) %>%
  plot(smooth = TRUE)


This decision curve suggests that although the model might be useful for the most risk-averse patients, it is actually harmful for patients with more moderate threshold probabilities. As such, the Brown et al. model should not be used in clinical practice. This effect, a model being harmful, occurs due to miscalibration: patients are given risks that are too high or too low. Note that miscalibration rarely occurs when models are created and tested on the same data set, as in the example where we created a model with both family history and the marker.

Joint or Conditional Tests

Many decisions in medicine are based on joint or conditional test results. A classic example is where patients are categorized on the basis of a test as being at high, low or intermediate risk. Patients at high risk are referred immediately for treatment (in our example biopsied); patients at low risk are reassured and advised that no further action is necessary; patients at intermediate risk are sent for an additional test, with subsequent treatment decisions made accordingly.

Imagine that for our example there was a previous test that categorized our patients as high, low, and intermediate risk of cancer and we wanted to incorporate our marker. There are five clinical options:

  1. Biopsy everyone
  2. Biopsy no one
  3. Biopsy everyone that was determined to be at high risk of cancer; don’t use the marker
  4. Measure the marker for everyone, then biopsy anyone who is either at high risk of cancer or who was determined to have a probability of cancer past a certain level, based on the marker (i.e. joint approach)
  5. Biopsy everyone at high risk; measure the marker for patients at intermediate risk and biopsy those with a probability of cancer past a certain level, based on the marker (i.e. conditional approach)

Decision curve analysis can incorporate joint or conditional testing. All that is required is that appropriate variables are calculated from the data set; decision curves are then calculated as normal. First we would create the variables to represent our joint and conditional approach. For our example, let us use 0.15 as the cutoff probability level for patients who had their marker measured and should be biopsied.

R
#Create a variable for the strategy of treating only high risk patients
df_cancer_dx <-
  df_cancer_dx %>%
  mutate(
    # This will be 1 for treat and 0 for don’t treat
    high_risk = ifelse(risk_group=="high", 1, 0),
    # Treat based on Joint Approach
    joint = ifelse(risk_group=="high" | cancerpredmarker > 0.15, 1, 0),
    # Treat based on Conditional Approach
    conditional =
      ifelse(risk_group=="high" | (risk_group=="intermediate" & cancerpredmarker > 0.15), 1, 0)
  )


Now that we have the variables worked out, we can run the decision curve analysis.

R
dca(cancer ~ high_risk + joint + conditional,
    data = df_cancer_dx,
    thresholds = seq(0, 0.35, 0.01)) %>%
  plot(smooth = TRUE)


This appears to show that the joint test is the best option for threshold probabilities from 5% to 24%, since it has the highest net benefit across that range. Below 5%, the clinical option of treating all would be superior to any other option, though treatment thresholds are rarely so low. From 28% to 35%, the conditional test would be a slightly better option, and between the two ranges the joint and conditional tests are comparable. The obvious disadvantage of the joint test is that the marker must be measured for everyone, and such tests may be expensive and time consuming.

Incorporating Harms into Model Assessment

To incorporate the harm of testing and measuring the marker, we ask a clinician, who tells us that, even if the marker were perfectly accurate, few clinicians would conduct more than 30 tests to predict one cancer diagnosis. This might be because the test is expensive, or requires some invasive procedure to obtain. The “harm” of measuring the marker is the reciprocal of 30, or 0.0333.

To construct the decision curves for each strategy, we now incorporate harm. The harm must be calculated separately for the conditional test, because only patients at intermediate risk have the marker measured: it is the proportion of patients who have the marker measured multiplied by the harm of measuring it.

R
# the harm of measuring the marker is stored in a scalar
harm_marker <- 0.0333
# in the conditional test, only patients at intermediate risk
# have their marker measured
# harm of the conditional approach is proportion of patients who have the marker
# measured multiplied by the harm
harm_conditional <- mean(df_cancer_dx$risk_group == "intermediate") * harm_marker

# Run the decision curve
dca(cancer ~ high_risk + joint + conditional,
    data = df_cancer_dx,
    thresholds = seq(0, 0.35, 0.01),
    harm = list(joint = harm_marker, conditional = harm_conditional)) %>%
  plot(smooth = TRUE)


Here the conditional test is clearly the best option above a treatment threshold of about 8%. In fact, once the harm of measuring the marker is taken into account, it is clearly not worth measuring the marker for everyone: the net benefit of simply treating high-risk patients is often higher than that of the joint test.

Saving out Net Benefit Values

For any model assessed through decision curve analysis, we can also save out the net benefit values to show them in a table. For a particular range of values, we need only specify the threshold to start at, where to stop, and the increment to use. To assess or display the increase in net benefit, we simply subtract the net benefit of treating all patients from the net benefit of our model.

Let us imagine that we want to view the net benefit of using only the marker to predict whether a patient has cancer, compared with the net benefits of biopsying all patients, at thresholds of 5%, 20%, and 35%.

For the model itself, we first need to specify that the marker variable, unlike the predictions from the models before, is not a probability. Based on our thresholds, we want to begin at 0.05 and, by increments of 0.15, stop at 0.35. As we are not interested in the graph here, we convert the results to a table rather than plotting them.

R
dca(cancer ~ marker,
    data = df_cancer_dx,
    as_probability = "marker",
    thresholds = seq(0.05, 0.35, 0.15)) %>%
  as_tibble() %>%
  select(label, threshold, net_benefit) %>%
  gt::gt() %>%
  gt::fmt_percent(columns = threshold, decimals = 0) %>%
  gt::fmt(columns = net_benefit, fns = function(x) style_sigfig(x, digits = 3)) %>%
  gt::cols_label(label = "Strategy",
                 threshold = "Decision Threshold",
                 net_benefit = "Net Benefit") %>%
  gt::cols_align("left", columns = label)

Strategy Decision Threshold Net Benefit
Treat All 5% 0.095
Treat All 20% -0.075
Treat All 35% -0.323
Treat None 5% 0.000
Treat None 20% 0.000
Treat None 35% 0.000
marker 5% 0.095
marker 20% 0.029
marker 35% 0.007

The table lists each threshold probability we specified, followed by the net benefit of treating all, treating none, and our model based on the marker. The increase in net benefit relative to treating all, and the net interventions avoided (discussed below), can be computed from these values.

Net benefit has a ready clinical interpretation. The value of 0.029 at a threshold probability of 20% can be interpreted as: “Compared with conducting no biopsies, biopsying on the basis of the marker is the equivalent of a strategy that found approximately 3 cancers per hundred patients without conducting any unnecessary biopsies.”
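This interpretation is a one-line conversion. A small sketch in Python (the 0.029 value comes from the table above; the helper name is ours):

```python
def net_true_positives_per_100(net_benefit):
    # net benefit x 100 = net true positives found per 100 patients,
    # holding unnecessary interventions constant
    return net_benefit * 100

print(round(net_true_positives_per_100(0.029), 1))  # 2.9, i.e. about 3 cancers per 100
```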

Interventions Avoided

As part of assessing the usefulness of the marker, we would be interested in whether using it to identify patients with and without cancer would help reduce unnecessary biopsies. This quantity is the net reduction in interventions. To view it graphically, we need only add it to our command.

R
dca(cancer ~ marker,
    data = df_cancer_dx,
    as_probability = "marker",
    thresholds = seq(0.05, 0.35, 0.01)) %>%
  net_intervention_avoided() %>%
  plot(smooth = TRUE)


At a probability threshold of 15%, the net reduction in interventions is about 33 per 100 patients. In other words, at this probability threshold, biopsying patients on the basis of the marker is the equivalent of a strategy that reduced the biopsy rate by 33%, without missing any cancers.
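The net reduction in interventions is derived from the net benefit values. A sketch of the conversion in Python (the model net benefit of 0.047 is an illustrative value chosen to roughly match the curve, not a number from the output; the function names are ours):

```python
def nb_treat_all(prevalence, pt):
    # net benefit of biopsying everyone at threshold pt
    return prevalence - (1 - prevalence) * pt / (1 - pt)

def net_interventions_avoided_per_100(nb_model, nb_all, pt):
    # difference in net benefit versus treat-all, converted to the
    # equivalent number of avoided interventions per 100 patients
    return (nb_model - nb_all) / (pt / (1 - pt)) * 100

# illustrative: model net benefit 0.047 at a 15% threshold, 14% prevalence
reduction = net_interventions_avoided_per_100(0.047, nb_treat_all(0.14, 0.15), 0.15)
print(round(reduction))  # 33 biopsies avoided per 100 patients
```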

DCA for Survival Outcomes

Motivating Example

Patients had a marker measured and were followed to determine whether they were eventually diagnosed with cancer, as well as the time to that diagnosis or censoring. We want to build a model of our own based on age, family history, and the marker, and assess how well the model predicts cancer diagnosis within 1.5 years.

Basic Data Set-up

We first need to import our data.

R
# import data
df_time_to_cancer_dx <-
  readr::read_csv(
    file = "https://raw.githubusercontent.com/ddsjoberg/dca-tutorial/main/data/df_time_to_cancer_dx.csv"
  )


The survival probability at any time point can be derived from any type of survival model; here we use a Cox model, as it is the most common in statistical practice. The formula for a survival probability from a Cox model is given by

\[ S(t|X) = S_0(t)^{e^{X\beta}} \]

where \(X\) is the matrix of covariates in the Cox model, \(\beta\) is the vector of parameter estimates from the Cox model, and \(S_0(t)\) is the baseline survival probability at time \(t\).
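As a quick numeric illustration of this formula (the baseline survival and linear predictor values below are made up):

```python
import math

def cox_survival(s0_t, xbeta):
    # S(t|X) = S0(t) ** exp(X*beta)
    return s0_t ** math.exp(xbeta)

def cox_failure(s0_t, xbeta):
    # probability of the event occurring by time t
    return 1 - cox_survival(s0_t, xbeta)

# with a linear predictor of 0, survival equals the baseline survival
print(cox_survival(0.95, 0.0))  # 0.95

# a positive linear predictor lowers survival, raising the failure probability
print(cox_failure(0.95, 0.5) > cox_failure(0.95, 0.0))  # True
```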

To get these values within our code, we will run a Cox model with age, family history, and the marker as predictors, save out the baseline survival function, and obtain the linear predictor from the model for each subject.

We then obtain the baseline survival probability at our time point of interest. If no patient was observed at the exact time of interest, we can use the baseline survival probability at the observed time closest to, but not after, that time point. We can then calculate the probability of failure at the specified time point. For our example, we will use a time point of 1.5, corresponding to the eighteen months we are interested in.

R
# Load survival library
library(survival)

# Run the cox model
coxmod <- coxph(Surv(ttcancer, cancer) ~ age + famhistory + marker, data = df_time_to_cancer_dx)

df_time_to_cancer_dx <-
  df_time_to_cancer_dx %>%
  mutate(
    pr_failure18 =
      1 - summary(survfit(coxmod, newdata = df_time_to_cancer_dx), times=1.5)$surv[1,]
  )


The code for running the decision curve analysis is straightforward after the probability of failure is calculated. All we have to do is specify the time point we are interested in. For our example, let us not only set the threshold from 0% to 50%, but also add smoothing. Note that different programs use different smoothers as there is no one smoother that is best in every situation. As such, results of a smoothed curve should always be compared with the unsmoothed curve to ensure accuracy.

R
dca(Surv(ttcancer, cancer) ~ pr_failure18,
    data = df_time_to_cancer_dx,
    time = 1.5,
    thresholds = seq(0, 0.5, 0.01)) %>%
  plot(smooth = TRUE)


This shows that using the model to inform clinical decisions will lead to superior outcomes for any decision associated with a threshold probability of above 2% or so.

DCA with Competing Risks

At times, data sets are subject to competing risks. For example, in our cancer data set, patients may have died prior to a cancer diagnosis. To run a competing risks analysis, we first recode the status variable as a factor whose first level is censoring, followed by the event of interest (cancer diagnosis) and the competing event (death from other causes).

R
df_time_to_cancer_dx <-
  df_time_to_cancer_dx %>%
  mutate(
    cancer_cr =
      factor(cancer_cr,
             levels = c("censor", "diagnosed with cancer", "dead other causes"))
  )

dca(Surv(ttcancer, cancer_cr) ~ pr_failure18,
    data = df_time_to_cancer_dx,
    time = 1.5,
    thresholds = seq(0, 0.5, 0.01)) %>%
  plot(smooth = TRUE)


DCA in a Case-Control Design

An issue with applying decision curve analysis to case-control data is that net benefit depends on prevalence, and prevalence in case-control studies is fixed by design. When working with case-control data, you can pass the population prevalence to accurately calculate the net benefit.
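The idea behind the adjustment: sensitivity can be estimated from the cases and specificity from the controls regardless of the sampling design, and the supplied population prevalence re-weights them into a population net benefit. A hedged sketch of this weighting in Python (the accuracy values are illustrative; the software performs the equivalent calculation internally):

```python
def nb_case_control(sensitivity, one_minus_spec, prevalence, pt):
    # sensitivity comes from the cases, 1 - specificity from the controls;
    # the assumed population prevalence re-weights them into net benefit
    return (prevalence * sensitivity
            - (1 - prevalence) * one_minus_spec * pt / (1 - pt))

# illustrative: 80% sensitivity, 70% specificity, 20% prevalence, 15% threshold
print(round(nb_case_control(0.80, 0.30, 0.20, 0.15), 3))  # 0.118
```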

R
# import data
df_cancer_dx_case_control <-
  readr::read_csv(
    file = "https://raw.githubusercontent.com/ddsjoberg/dca-tutorial/main/data/df_cancer_dx_case_control.csv"
  )

# summarize data
df_cancer_dx_case_control %>%
  select(-patientid) %>%
  tbl_summary(by = casecontrol,
              type = all_dichotomous() ~ "categorical") %>%
  modify_spanning_header(all_stat_cols() ~ "**Case-Control Status**")

                      Case-Control Status
Characteristic        0, N = 330¹         1, N = 420¹
risk_group
  high                5 (1.5%)            16 (3.8%)
  intermediate        120 (36%)           215 (51%)
  low                 205 (62%)           189 (45%)
age                   64.0 (60.9, 67.0)   65.8 (61.9, 69.1)
famhistory
  0                   292 (88%)           343 (82%)
  1                   38 (12%)            77 (18%)
marker                0.58 (0.26, 1.10)   0.71 (0.32, 1.57)
cancerpredmarker      0.04 (0.02, 0.11)   0.09 (0.03, 0.27)

¹ n (%); Median (IQR)


In the example below, the population prevalence is 20%.

R
dca(casecontrol ~ cancerpredmarker,
    data = df_cancer_dx_case_control,
    prevalence = 0.20,
    thresholds = seq(0, 0.5, 0.01)) %>%
  plot(smooth = TRUE)


In this example, the model performs worse than a biopsy all strategy from threshold 0% to 17%, and the model has benefit from 18% to 40%.

Correction for Overfit

As a model created from a data set may capture random error (noise) in addition to the true relationships between the outcome and predictors, we should take care to correct for overfit when comparing it to other models. One way to do this is repeated 10-fold cross-validation. The main steps are as follows:

  1. Randomly divide the data set into 10 groups of equal size, with equal numbers of events in each group.
  2. Fit the model using all but the first group.
  3. Apply the model created from all but the first group (in step 2), to the first group to obtain the predicted probability of the event.
  4. Repeat steps (2) and (3) leaving out and then applying the fitted model for each of the groups. Every subject now has a predicted probability of the event.
  5. Using the predicted probabilities, compute the net benefit at various threshold probabilities.
  6. Although this step is not always done, steps (1) to (5) can be repeated many times (e.g. 200), with the corrected net benefit for each threshold probability taken as the mean across replications. For simplicity, the code below performs a single round of 10-fold cross-validation.

R
# set seed for random process
set.seed(112358)

# create a 10-fold cross validation set
rsample::vfold_cv(df_cancer_dx, v = 10, repeats = 1) %>%
  # for each cut of the data, build logistic regression on the 90% (analysis set),
  # and perform DCA on the 10% (assessment set)
  rowwise() %>%
  mutate(
    # build regression model on analysis set
    glm_analysis =
      glm(cancer ~ marker + age + famhistory,
          data = rsample::analysis(splits),
          family=binomial) %>%
      list(),
    # get predictions for assessment set
    df_assessment =
      broom::augment(
        glm_analysis,
        newdata = rsample::assessment(splits),
        type.predict = "response"
      ) %>%
      list(),
    # calculate net benefit on assessment set
    dca_assessment =
      dca(cancer ~ .fitted,
          data = df_assessment,
          thresholds = seq(0, 0.35, 0.01),
          label = list(.fitted = "Cross-validated Prediction Model")) %>%
      as_tibble() %>%
      list()
  ) %>%
  # pool results from the 10-fold cross validation
  pull(dca_assessment) %>%
  bind_rows() %>%
  group_by(variable, label, threshold) %>%
  summarise(net_benefit = mean(net_benefit), .groups = "drop") %>%
  # plot cross validated net benefit values
  ggplot(aes(x = threshold, y = net_benefit, color = label)) +
  stat_smooth(method = "loess", se = FALSE, formula = "y ~ x", span = 0.2) +
  coord_cartesian(ylim = c(-0.014, 0.14)) +
  scale_x_continuous(labels = scales::percent_format(accuracy = 1)) +
  labs(x = "Threshold Probability", y = "Net Benefit", color = "") +
  theme_bw()